Section: Research Program

Linear algebra and polynomial evaluation

Linear algebra and polynomial evaluation are key tools for the design, synthesis, and validation of fast and accurate arithmetics. Conversely, arithmetic features can have a strong impact on the cost and numerical quality of computations with matrices, vectors, and polynomials. Thus, deepening our understanding of such interactions is one of our major long-term goals.

Code generation for polynomial expression evaluation

We plan to improve our work on code generation for polynomials, and to extend it to general arithmetic expressions as well as to operations typical of level 1 BLAS, such as sums and dot products. Due to the intrinsic multivariate nature of such problems, the number of possible evaluation schemes is huge, and a challenge here is to compute, and certify, in a reasonable amount of time, evaluation programs that satisfy both efficiency and accuracy constraints. To achieve this goal, we will in particular study the design of specific program transformation techniques driven by our certification tools for tight and guaranteed error bounds.
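
As a toy illustration of why the choice of evaluation scheme matters, the following sketch (hypothetical code, not our generator; the polynomial and the two schemes are chosen arbitrarily) compares two ways of evaluating the same degree-4 polynomial in binary64 and measures their rounding errors against an exact rational reference:

    # Two evaluation schemes for the same polynomial can round differently.
    from fractions import Fraction

    coeffs = [1.0, -3.25, 0.5, 2.0, -1.125]   # p(x) = c0 + c1*x + ... + c4*x^4

    def horner(c, x):
        # Horner's rule: fewest multiplications, fully sequential.
        r = c[-1]
        for a in reversed(c[:-1]):
            r = r * x + a
        return r

    def estrin(c, x):
        # Estrin-like scheme: independent subexpressions expose parallelism.
        x2 = x * x
        return (c[0] + c[1] * x) + x2 * ((c[2] + c[3] * x) + x2 * c[4])

    def exact(c, x):
        # Exact rational reference value, for measuring the rounding error.
        xq = Fraction(x)
        return sum(Fraction(a) * xq**i for i, a in enumerate(c))

    x = 0.7
    ref = exact(coeffs, x)
    for name, f in (("Horner", horner), ("Estrin", estrin)):
        print(name, float(abs(Fraction(f(coeffs, x)) - ref)))

A code generator faces the same question at scale: among exponentially many such schemes, it must find one whose certified error bound and latency both meet the given constraints.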

Exact linear algebra

We will pursue our work on the design and analysis of fast algorithms for exact linear algebra in three directions. First, for general matrices over a field k (algebraic complexity model), we want to improve upon existing algorithms by achieving efficiency both in terms of arithmetic cost (expressed via the exponent of matrix multiplication and the rank of the matrix) and in terms of memory usage (in-place algorithms). Second, this approach will be extended to families of structured matrices (Toeplitz-like, etc.) using the displacement rank as an additional parameter in the cost analyses; another challenge here is to move toward a complete understanding of complexity reductions from one structure to another. A third direction deals with polynomial matrices, that is, matrices over k[x]. Currently, algorithms for polynomial matrices make it possible to solve a few structured linear algebra problems more satisfactorily than the classical structured-matrix approach does; since these algorithms have no structured-matrix analogues yet, we plan to work on unifying these two seemingly different settings.
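
To make the displacement rank parameter concrete, here is the standard Sylvester-type definition used for Toeplitz-like matrices (classical material from the structured matrix literature, not specific to our results):

    % Z_phi denotes the n x n phi-circulant down-shift matrix.
    % The displacement of A and its displacement rank are
    \[
      \nabla(A) \;=\; Z_1 A - A Z_0,
      \qquad
      \alpha(A) \;=\; \operatorname{rank} \nabla(A).
    \]
    % A Toeplitz matrix T = (t_{i-j}) satisfies alpha(T) <= 2, and
    % "Toeplitz-like" means alpha(A) is small compared to n; cost
    % analyses are then expressed in terms of both n and alpha(A).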

Condition numbers

A standard approach to reaching a prescribed output accuracy for the solution to a given problem is to compute approximate solutions with increasing precisions: for each precision, a certified error bound is computed, and the process stops when the prescribed accuracy is reached.
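
The sketch below illustrates this precision-doubling loop on a deliberately simple problem, certifying sqrt(2) with Python's decimal module (a hypothetical example; the problems we actually target, such as linear system solving, require more elaborate bounds):

    # Increase the working precision until a certified error bound meets the target.
    from decimal import Decimal, getcontext

    def certified_sqrt2(target):
        prec = 20
        while True:
            getcontext().prec = prec
            x = Decimal(2).sqrt()          # approximate solution at prec digits
            getcontext().prec = 2 * prec   # the residual below is then exact
            residual = abs(x * x - 2)
            # |x - sqrt(2)| = |x^2 - 2| / (x + sqrt(2)) <= residual / x;
            # a fully rigorous version would round this division upward
            # (getcontext().rounding = decimal.ROUND_CEILING).
            bound = residual / x
            if bound <= target:
                return x, bound
            prec *= 2                      # not accurate enough: double and retry

    x, bound = certified_sqrt2(Decimal("1e-50"))
    print(x, bound)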

Combined with backward error analysis techniques, condition numbers are a well-known tool for obtaining first-order error bounds on the computed solution to a given problem. Conversely, condition numbers can be used to estimate the precision required to obtain a prescribed output accuracy, thus accelerating the convergence of certified algorithms. Future research will focus on the computation or estimation of the conditioning of matrix factorizations (LU, QR, ...), in particular on algorithmic complexity issues and efficient software implementations. We will also investigate the use of automatic differentiation as a tool for computing condition numbers.
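
For reference, the classical rule of thumb behind this approach, stated here for linear system solving (standard textbook material):

    % To first order, forward error <= condition number x backward error.
    % For Ax = b with kappa(A) = ||A|| ||A^{-1}||, a computed solution
    % \hat{x} with residual r = b - A \hat{x} satisfies
    \[
      \frac{\|\hat{x} - x\|}{\|x\|}
      \;\le\;
      \kappa(A) \, \frac{\|r\|}{\|A\| \, \|x\|},
    \]
    % so roughly log10(kappa(A)) extra digits of working precision are
    % needed on top of the prescribed output accuracy.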

Iterative refinement methods for linear algebra

Another direction deals with improving the efficiency and quality of self-validating methods for computing error bounds at run time. The starting point is the result of a floating-point computation, such as linear system solving. We aim at computing a bound on the error between that approximate result and the exact result, using interval arithmetic to get an enclosure. We believe that the methods of choice are iterative refinement methods: such methods are contracting, and thus particularly well suited for interval computations. However, it is wise to use optimized floating-point routines for linear algebra, in order to reach the performance levels achieved in high-performance computing. Again, this work covers all aspects, from the pen-and-paper proof of convergence to the efficient implementation.
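
A bare-bones floating-point version of the refinement loop is sketched below (assuming NumPy; the interval enclosure step, which is the actual research topic, is omitted, and the residual is computed in extended precision as a stand-in for an exact or double-double residual):

    # Iterative refinement for Ax = b: each step contracts the error,
    # provided the residual is computed accurately enough.
    import numpy as np

    def refine(A, b, steps=3):
        x = np.linalg.solve(A, b)          # initial binary64 solve
        for _ in range(steps):
            # Residual b - A x in extended precision (np.longdouble).
            r = (b.astype(np.longdouble)
                 - A.astype(np.longdouble) @ x.astype(np.longdouble))
            d = np.linalg.solve(A, r.astype(np.float64))  # in practice: reuse the LU factors
            x = x + d                      # contracting correction
        return x

    A = np.array([[1e4, 1.0], [1.0, 1e-4]])
    b = np.array([1.0, 2.0])
    print(refine(A, b))

In the self-validating version, the correction step is carried out in interval arithmetic, so that the loop returns a guaranteed enclosure of the exact solution rather than just a refined approximation.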

High performance linear algebra and links with Euclidean lattice reduction

Our theoretical studies on linear algebra will be applied to the design of high-performance building blocks, scientific computing/computer algebra patterns, and linear algebra algorithms. In software design, our aim is above all to transfer our future research results on the interplay between bit complexity and algebraic complexity, on the interplay between exact computing and approximate (or certified) computing, and on asymptotically fast algorithms. Current lattice basis reduction algorithms rely heavily on fast linear algebra, and high-performance basis reduction will be one of our main directions.